3 research outputs found
Bringing Human Robot Interaction towards Trust and Social Engineering
Robots started their journey in books and movies; nowadays, they are becoming an
important part of our daily lives: from industrial robots, through entertainment
robots, to social robotics in fields such as healthcare and education.
An important aspect of social robotics is the human counterpart, and therefore the
interaction between humans and robots. Interactions among humans are often
taken for granted because, from childhood, we learn how to interact with each other. In
robotics, this interaction is still very immature, yet it is critical for the successful
incorporation of robots into society. Human-robot interaction (HRI) is the domain that
works on improving these interactions.
HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that
somebody or something is good and reliable, and it is critical for a developed society.
Therefore, in a society where robots take part, the trust they can generate will be essential
for cohabitation.
A downside of trust is overtrusting an entity; in other words, a misalignment between
the trust placed in an agent and the expectation of morally correct behaviour. This effect
can negatively influence and damage the interactions between agents. In the case of
humans, it is usually exploited by scammers, con men, and social engineers, who take
advantage of people's overtrust in order to manipulate them into performing actions
that may not be beneficial for the victims.
This thesis sheds light on how trust towards robots develops, how that trust can
become overtrust, and how overtrust can be exploited through social engineering techniques. More
precisely, the following experiments were carried out: (i) Treasure Hunt, in which
the robot followed a social engineering framework: it gathered personal
information from the participants, built trust and rapport with them, and in the
end exploited that trust by manipulating participants into performing a risky action.
(ii) Wicked Professor, in which a very human-like robot tried to exert its authority to
make participants comply with socially inappropriate requests. Most of the participants realized
that the requests were morally wrong, but eventually they succumbed to the robot's
authority while holding the robot morally responsible. (iii) Detective iCub, which
evaluated whether the robot could be endowed with the ability to detect when its
human partner was lying. Deception detection is an essential skill for social engineers and
for professionals in education, healthcare, and security. The robot achieved
75% accuracy in lie detection. Slight differences were also found in the
behaviour participants exhibited when interacting with a human versus a robot
interrogator.
Lastly, this thesis approaches the topic of privacy, a fundamental human value. With
the integration of robotics and technology into our society, privacy will be affected in ways
we are not used to. Robots have sensors able to record and gather all kinds of data, and it is
possible that this information is transmitted over the Internet without the knowledge of the
user. This is an important aspect to consider, since a privacy violation can heavily
impact trust.
In summary, this thesis shows that robots are able to establish and improve trust
during an interaction, to take advantage of overtrust, and to misuse it by applying different
types of social engineering techniques, such as manipulation and appeals to authority. Moreover,
robots can be enabled to pick up different human cues to detect deception, which can
help both social engineers and professionals who work with people. Nevertheless, it is of
the utmost importance to make roboticists, programmers, entrepreneurs, lawyers,
psychologists, and the other sectors involved aware that social robots can be highly beneficial
for humans, but they could also be exploited for malicious purposes.
Expectations vs. Reality: Unreliability and Transparency in a Treasure Hunt Game With iCub
Trust is essential in human-robot interaction, and at a time when machines are not yet fully reliable, it is important to study how robotic hardware faults affect the human counterpart. This experiment builds on previous research that studied trust changes in a game-like scenario with the humanoid robot iCub. Several robot hardware failures (validated in a separate online study) were introduced in order to measure changes in trust due to the unreliability of the iCub. A total of 68 participants took part in this study. For half of them, the robot adopted a transparent approach, explaining each failure after it happened. Participants' behaviour was also compared to that of the 61 participants who played the same game with a fully reliable robot in the previous study. Against all expectations, introducing manifest hardware failures does not seem to significantly affect trust, while transparency mainly deteriorates the quality of the interaction with the robot.
A Tool for Placement and Portfolio Management at the Facultad de Enfermería, Fisioterapia y Podología of the Universidad Complutense de Madrid
Given the enormous growth in the number of students enrolled in the Nursing, Physiotherapy, and Podiatry degrees at the Universidad Complutense de Madrid in recent years, and due to the functional shortcomings of the academic management systems with respect to the monitoring of practical training, processing the official documentation has become an
arduous and costly task for the administrative staff of these Faculties.
Until now, degrees with extensive practical training at institutions outside the University were forced to keep exhaustive independent records, in many cases on paper, due to the lack of a specific application that could satisfy these very particular needs. The difference between monitoring a student in the classroom, in activities carried out in a nearby environment with the same teaching staff, and monitoring when the educator is different from the evaluator, or when both collaborate to produce a final assessment, translates into specific coordination
requirements.
To coherently combine the grades obtained from classroom teaching with those from the placements carried out by students at the centres assigned for that purpose, the Virtual Campus of the Universidad Complutense de Madrid was used until now, together with the paper evaluations provided by the associate lecturers at the partner centres.
In this final degree project, a useful, intuitive, and complete tool has been designed and implemented for managing placements and teaching portfolios at the Facultad de Enfermería, Fisioterapia y Podología of the Universidad Complutense de Madrid. It consists of a web application, accessible to the whole university community, that will serve as a methodological strategy for reflection on, monitoring of, and evaluation of the student's
teaching-learning process. Following the classification of portfolio types proposed by Charlotte Danielson and Leslye Abrutyn [1], the application is based on a Diagnostic Assessment Portfolio, whose purpose is to document what the student has learned in relation to specific curricular objectives.